Startup Runs Spiking Neural Network on Arm
Eta Compute, a startup that demonstrated a very low power microcontroller using asynchronous technology at Hot Chips last summer, has come up with a new spin that it calls "the industry's first neuromorphic platform." In announcing the availability of its latest SoC platform IC based on TSMC's 55nm ULP process, Paul Washkewicz, vice president of marketing and a co-founder of Eta Compute, on Wednesday (March 15) pitched it as an ideal platform for "delivering neuromorphic computing based machine intelligence to mobile and edge devices."

But wait. When did Eta Compute's 0.25V IoT chip from last year's Hot Chips become a "neuromorphic computing" engine? Did the startup pivot slightly in strategy? Washkewicz explained that Eta ventured down the path of "machine intelligence" when "customers started telling us that they want a little bit more intelligence on the edge."

The two pillars of Eta Compute's claim to neuromorphic computing are an event-driven, clockless delay-insensitive asynchronous logic (DIAL) architecture for its hardware and the company's spiking neural network software.

Eta Compute (Westlake Village, Calif.) had already developed DIAL, which uses a novel handshake to wake up circuits resting at a very low power level. The system turns on devices quickly, without the setup and wait times usually required by synchronous circuits.

The key to the startup's course correction was adding more intelligence, in part by luring Narayan Srinivasa away from Intel last fall. Srinivasa was a senior principal engineer and chief scientist at Intel Labs, where he worked on a self-learning neuromorphic architecture meant to tackle a broader class of AI problems.

Last September, while Srinivasa was still at Intel, the company announced a neuromorphic artificial intelligence test chip named Loihi. Intel said the test chip was designed to mimic brain functions by learning from data gained from its environment.

Srinivasa, now CTO of Eta Compute, said in a statement: "Our patented event driven processor architecture, DIAL, is combined with our fully customizable neuromorphic algorithms." He added, "These will be the foundation of a diverse and wide-ranging set of applications that deliver machine intelligence to the network edge."

The concept of a computer that mimics the brain isn't new. But Intel's Loihi AI chip should not be confused with Eta Compute's IP, because the startup is not offering exactly that. Eta Compute's intention is to marry its asynchronous, clockless, event-driven DIAL hardware architecture with algorithms based on spiking neural models developed by Srinivasa and his team, Washkewicz explained.

Linley Gwennap, principal analyst at the Linley Group, is skeptical. Admitting that he hasn't talked to Eta, he deemed its announcement "confusing."

Gwennap explained, "Neuromorphic computing refers to the use of integrated circuits to mimic the structure of the brain. Typical neuromorphic systems feature artificial neurons that together can implement certain types of neural networks. Naturally, this approach to computing is completely different from traditional von Neumann CPUs such as Cortex-M3."

Very low-power embedded processor

In the end, what Eta Compute is really announcing is "a very low power Cortex-M3 CPU design and some other very low power IP cores," Gwennap suspected. After all, "We do see growing demand for low-power processors for remote IoT sensors and for certain wearable designs. This is a small market today but could become sizable as IoT becomes more popular," he added.

Included in Eta's new 55nm IP portfolio are an Arm Cortex-M3 processor, the NXP-developed CoolFlux DSP, a 12-bit SAR analog-to-digital converter, power management and voltage references optimized to deliver high-efficiency voltage scaling, and support for low-power analog blocks.

In an interview, Washkewicz told EE Times that Eta Compute's asynchronous Cortex-M3 embedded processor runs at 65 MHz on 2 milliwatts (mW). Eta Compute added NXP's DSP to its IP portfolio, he noted, because machine intelligence requires certain signal processing, and NXP's CoolFlux is a very low power, asynchronous DSP.

Eta Compute is not offering a full-fledged software development environment for its licensees at this point. Instead, "we are following almost an ASIC-like model," said Washkewicz, "by hardwiring the AI engine in the Arm processor." Meanwhile, training is done by Eta Compute's in-house software development team.

Eta Compute is already sampling its new IP platform. The startup has "a few lead customers," all of whom are exploring ways to integrate the IP cores into their silicon, Washkewicz said.

Applications for such chips range from speech and sensor fusion to wearables, heart-rate monitors and image applications (face identification but not face recognition), Washkewicz explained.

Asked about potential licensees, he said, "We can help any users of Arm MCUs looking to improve intelligence." He noted that ASIC houses and Chinese fabless chip vendors are also good targets.
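The spiking approach Eta Compute describes is event-driven in the same sense as its DIAL hardware: a neuron does work only when its input drives it over a threshold. As a rough, illustrative sketch only (not Eta Compute's proprietary algorithms; the parameter values here are arbitrary), a minimal leaky integrate-and-fire neuron can be written in a few lines of Python:

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95):
    """Minimal leaky integrate-and-fire neuron: the membrane potential leaks,
    integrates each step's input, and emits a spike (1) when it crosses the
    threshold, after which it resets to zero."""
    v = 0.0                      # membrane potential
    spikes = []
    for i_t in input_current:    # one input value per time step
        v = leak * v + i_t       # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(1)     # event: the neuron fires
            v = 0.0              # reset after the spike
        else:
            spikes.append(0)     # no event, nothing for downstream to do
    return np.array(spikes)

# A constant sub-threshold drive produces sparse, periodic spikes.
print(lif_neuron(np.full(20, 0.3)))
```

Because downstream computation is triggered only when a spike fires, activity and power track the input rather than a fixed clock, which is the property the company pairs with its clockless logic.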
Release time: 2018-03-19
Google Fellow: Neural Nets Need Optimized Hardware
If you aren't currently considering how to use deep neural networks to solve your problems, you almost certainly should be, according to Jeff Dean, a Google senior fellow and leader of the deep learning artificial intelligence research project known as Google Brain.

In a keynote address at the Hot Chips conference here Tuesday (Aug. 22), Dean outlined how deep neural nets are dramatically reshaping computational devices and making significant strides in speech, vision, search, robotics and healthcare, among other areas. He said that hardware systems optimized to perform the small handful of specific operations that make up the vast majority of machine-learning models would make for more powerful neural networks.

"Building specialized computers for the properties that neural nets have makes a lot of sense," Dean said. "If you can produce a system that is really good at doing very specific [accelerated low-precision linear algebra] operations, that's what we want."

Of the 14 Grand Challenges for Engineering in the 21st Century identified by the National Academy of Engineering in 2008, Dean believes that neural networks can play an integral role in solving five, including restoring and improving urban infrastructure, advancing health informatics, engineering better medicines and reverse-engineering the human brain. But Dean said neural networks offer the greatest potential for helping to solve the final challenge on the NAE's list: engineering the tools for scientific discovery.

"People have woken up to the idea that we need more computational power for a lot of these problems," Dean said.

Google recently began giving customers and researchers access to the second generation of its Tensor Processing Unit (TPU) machine-learning ASIC through a cloud service. A custom accelerator board featuring four of the second-generation devices boasts 180 teraflops of computation and 64 GB of High Bandwidth Memory (HBM).

Dean said the device is designed to be connected into larger configurations: a "TPU pod" featuring 64 second-generation TPUs, capable of 11.5 petaflops and offering 4 terabytes of HBM memory. He added that Google is making 1,000 Cloud TPUs available for free to top researchers who are committed to open machine-learning research.

"We are pretty excited about the possibilities of the pod for solving bigger problems," Dean said.

In 2015, Google released its TensorFlow software library for machine learning as open source, with the goal of establishing a common platform for expressing machine-learning ideas and systems. Dean showed a chart demonstrating that, in just over a year and a half, TensorFlow has become far more popular than other libraries with similar uses.

"It's been pretty rewarding to have this rather large community now crop up," Dean said.

The rise of neural networks, which has accelerated greatly over the past five years, has been made possible by tremendous advances in compute power over the past 20 years, Dean said. He wrote a thesis about neural networks in 1990 and believed at the time that neural networks were not far off from being viable, needing only about 60 times more compute power than was available then.

"It turned out that what we really needed was about 1 million times more compute power, not 60," Dean said.
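The pod figures are consistent with the per-board numbers quoted above. A quick back-of-the-envelope check in Python, assuming (as the figures suggest, though the article does not say so explicitly) that the pod's 64 units are the four-chip, 180-teraflops, 64 GB boards:

```python
# Back-of-the-envelope check of the Cloud TPU pod figures quoted above.
# Assumption: each of the pod's 64 units is one 180-teraflops, 64 GB board.

BOARD_TFLOPS = 180          # teraflops per second-generation TPU board
BOARD_HBM_GB = 64           # GB of High Bandwidth Memory per board
BOARDS_PER_POD = 64

pod_pflops = BOARD_TFLOPS * BOARDS_PER_POD / 1_000      # tera -> peta
pod_hbm_tb = BOARD_HBM_GB * BOARDS_PER_POD / 1_024      # GB -> TB

print(f"pod compute: {pod_pflops:.1f} petaflops")        # ~11.5 petaflops
print(f"pod memory:  {pod_hbm_tb:.0f} TB of HBM")        # ~4 TB
```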
Release time: 2017-08-24
ARM SoCs Take Soft Roads to Neural Nets
NXP is supporting inference jobs such as image recognition in software on its i.MX8 processor. It aims to extend the approach to natural-language processing later this year, claiming that dedicated hardware is not required in resource-constrained systems.

The chip vendor is following in the footsteps of its merger partner, Qualcomm. However, the mobile giant expects to eventually augment its code with dedicated hardware. Their shared IP partner, ARM, is developing neural-networking libraries for its cores, although it declined an interview for this article.

NXP's i.MX8 packs two GPU cores from Vivante, now part of VeriSilicon. They use about 20 opcodes that support multiply-accumulates and bit extraction and replacement, originally geared for running computer vision.

"Adding more and more hardware is not the way forward on the power budget of a 5-W SoC," said Geoff Lees, NXP's executive vice president for i.MX. "I would like to double the Flops, but we got the image-processing acceleration we wanted for facial and gesture recognition and better voice accuracy."

The software is now in use with NXP's lead customers for image-recognition jobs. Meanwhile, VeriSilicon and NXP are working on additional extensions to the GPU shader pipeline targeting natural-language processing. They hope to have the code available by the end of the year.

"Our VX extensions were not originally viewed as a neural network accelerator, but we found [that] they work extraordinarily well … the math isn't much different," said Thomas "Rick" Tewell, vice president of system solutions at VeriSilicon.

The GPU cores come with OpenCL drivers. "No one has to touch the instruction extensions … people don't want to get locked into an architecture or tool set; they want to train a set of engineers who are interchangeable."

ARM is taking a similar approach with its ARM Compute Library, released in March to run neural-net tasks on its Cortex-A and Mali cores.

"It doesn't have a lot of features yet and only supports single-precision math (we'd prefer 8-bit), but I know ARM is working on it," said a Baidu researcher working on the company's neural-net benchmark. "It also lacks support for recurrent neural nets, but most libraries still lack this."

For its part, Qualcomm released its Snapdragon 820 Neural Processing Engine SDK earlier this year. It supports jobs run on the SoC's CPU, GPU, and DSP and includes Hexagon DSP vector extensions to run 8-bit math for neural nets.

"Long term, there could be a need for dedicated hardware," said Gary Brotman, director of product management for commercial machine-learning products at Qualcomm. "We have work in the lab today but have not discussed a time to market."

The code supports a variety of neural nets, including the LSTMs often used for audio processing. Both NXP and Qualcomm execs said that it's still early days for the availability of good data sets to train models for natural-language processing. "Audio is the next frontier," said Brotman.
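The recurring theme across these SDKs is reduced-precision arithmetic: the multiply-accumulate at the heart of a neural-net layer can run on 8-bit integers instead of 32-bit floats, trading a little accuracy for much cheaper math and data movement. A minimal sketch of that idea in Python/NumPy (illustrative only; the per-tensor scaling and rounding scheme here are simplified assumptions, not the quantization any of these products actually uses):

```python
import numpy as np

def quantize(x, num_bits=8):
    """Map float values onto signed 8-bit integers with a per-tensor scale."""
    scale = np.max(np.abs(x)) / 127.0          # 127 = max magnitude of int8
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

# A toy "layer": dot product of weights and activations.
rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
a = rng.normal(size=256).astype(np.float32)

qw, sw = quantize(w)
qa, sa = quantize(a)

# int8 multiply-accumulate, accumulated in a wider integer type (as DSPs and
# GPUs typically do), then rescaled back to float once at the end.
acc = np.dot(qw.astype(np.int32), qa.astype(np.int32))
approx = acc * sw * sa

print("float32 result :", float(np.dot(w, a)))
print("int8 MAC result:", float(approx))      # close, but not identical
```

An 8-bit operand is a quarter the size of a 32-bit float, which is why the Baidu researcher quoted above would prefer 8-bit support in the Compute Library and why Qualcomm's Hexagon vector extensions run 8-bit math.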
Release time: 2017-06-30
Baidu Upgrades Neural Net Benchmark
Baidu updated its open-source benchmark for neural networks, adding support for inference jobs and for low-precision math. DeepBench provides a target for optimizing chips that help data centers build larger and, thus, more accurate models for jobs such as image and natural-language recognition.

The work shows that it's still early days for neural nets. So far, results running the training version of the spec launched last September are available only on a handful of Intel Xeon and Nvidia graphics processors.

Results for the new benchmark on server-based inference jobs should be available on those chips soon. In addition, Baidu is releasing results on inference jobs run on devices including the iPhone 6, iPhone 7, and a Raspberry Pi board.

Inference in the server has longer latency but can use larger processors and more memory than are available in embedded devices such as smartphones and smart speakers. "We've tried to avoid drawing big conclusions; so far, we're just compiling results," said Sharan Narang, a systems researcher at Baidu's Silicon Valley AI Lab.

At press time, it was not clear whether Intel would have inference results ready for today's release, and it is still working on results for its massively parallel Knights Mill. AMD expressed support for the benchmark but has yet to release results running it on its new Epyc x86 processors and Radeon Instinct GPUs.

A handful of startups, including Cornami, Graphcore, Wave Computing, and Nervana (acquired by Intel), have plans for deep-learning accelerators.

"Chip makers are very excited about this and want to showcase their results, [but] we don't want any use of proprietary libraries, only open ones, so these things take a lot of effort," said Narang. "We've spoken to Nervana, Graphcore, and Wave, and they all have promising approaches, but none can benchmark real silicon yet."

The updated DeepBench supports lower-precision floating-point operations and sparse operations for inference to boost performance.

"There's a clear correlation in deep learning of larger models and larger data sets getting better accuracy in any app, so we want to build the largest possible models," he said. "We need larger processors, reduced-precision math, and other techniques we're working on to achieve that goal."
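Rather than running whole networks, DeepBench times the primitive operations (dense matrix multiplies, convolutions, recurrent-layer kernels) that dominate training and inference, and reports their speed across hardware. A rough sketch of what such a kernel measurement looks like, here a dense matrix multiply timed in NumPy (illustrative only; DeepBench itself calls vendor libraries and uses kernel shapes drawn from real models, and the shapes below are placeholders):

```python
import time
import numpy as np

def time_gemm(m, n, k, dtype=np.float32, repeats=10):
    """Time a dense (m x k) @ (k x n) matrix multiply and report achieved GFLOPS.
    A GEMM of this shape performs roughly 2*m*n*k floating-point operations."""
    a = np.random.rand(m, k).astype(dtype)
    b = np.random.rand(k, n).astype(dtype)
    np.dot(a, b)                               # warm-up, outside the timed loop
    start = time.perf_counter()
    for _ in range(repeats):
        np.dot(a, b)
    elapsed = (time.perf_counter() - start) / repeats
    gflops = 2.0 * m * n * k / elapsed / 1e9
    return elapsed, gflops

# Placeholder kernel sizes, chosen only for illustration.
for shape in [(1024, 128, 1024), (2048, 64, 2048)]:
    t, rate = time_gemm(*shape)
    print(f"GEMM {shape}: {t * 1e3:.2f} ms, {rate:.1f} GFLOPS")
```

Lower-precision and sparse variants of these same kernels are what the updated benchmark adds on the inference side.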
Release time: 2017-06-29

